IEEE Access
● Institute of Electrical and Electronics Engineers (IEEE)
Preprints posted in the last 90 days, ranked by how well they match IEEE Access's content profile, based on 31 papers previously published here. The average preprint has a 0.03% match score for this journal, so anything above that is already an above-average fit.
Chato, L.; Kagozi, A.
Accurate diagnosis of cardiac abnormalities from electrocardiogram (ECG) signals remains a central challenge in automated cardiovascular assessment. This study investigates the effectiveness of time-frequency representations and deep learning architectures for classifying 12-lead ECGs into five diagnostic super-classes using the PTB-XL dataset. The Continuous Wavelet Transform is applied to generate two time-frequency representations, scalograms and phasograms, which capture the spectral energy and phase distributions, respectively. We experiment with both early and late information fusion strategies across several convolutional and transformer-based networks, combining a custom Convolutional Neural Network, hybrid deep learning, transfer learning, feature fusion, ensemble modeling, and weighted loss strategies. An ensemble fusing models trained on the time-frequency and time-domain representations achieved the best overall performance, with an Area Under the Curve of 0.9233, surpassing the individual modalities. To further improve the results, a weighted focal loss is used to raise the low classification rates on some labels caused by imbalanced data. The results highlight the potential of multi-representation wavelet fusion for interpretable and generalizable ECG classification.
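The scalogram/phasogram construction described above can be sketched with a minimal numpy-only continuous wavelet transform; the Morlet parameterization, scale grid, and toy signal below are illustrative assumptions, not the authors' exact settings:

```python
import numpy as np

def morlet(t, w0=6.0):
    """Complex Morlet mother wavelet (illustrative w0, not the paper's setting)."""
    return np.pi ** -0.25 * np.exp(1j * w0 * t - t ** 2 / 2)

def cwt_scalogram(signal, scales, fs):
    """CWT by direct convolution.

    Returns |coefficients| (scalogram) and phase (phasogram), one row per scale.
    """
    n = len(signal)
    coeffs = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        m = min(int(10 * s * fs), (n - 1) // 2)   # keep kernel shorter than the signal
        t = np.arange(-m, m + 1) / fs
        psi = morlet(t / s) / np.sqrt(s)
        coeffs[i] = np.convolve(signal, np.conj(psi)[::-1], mode="same")
    return np.abs(coeffs), np.angle(coeffs)

fs = 100.0                                        # toy sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)                       # 4 s test signal
x = np.sin(2 * np.pi * 1 * t) + 0.5 * np.sin(2 * np.pi * 8 * t)
scales = np.geomspace(0.02, 1.0, 32)              # illustrative scale grid
scalogram, phasogram = cwt_scalogram(x, scales, fs)
```

Stacking the magnitude and phase maps (or models trained on each) is the kind of early/late fusion input the abstract refers to.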
Lavezzo, L.; Grandjean, D.; Delplanque, S.; Barcos-Munoz, F.; Borradori-Tolsa, C.; Scilingo, E. P.; Filippa, M.; Nardelli, M.
Synchrony is a key mechanism underlying human interactions. Quantifying the level of physiological synchronization that occurs during dyadic exchanges is essential to fully comprehend social phenomena. We present a new index to characterize the coupling of complex physiological dynamics: the optimized Multichannel Complexity Index (opMCI). We validated this approach using synthetic time series from two coupled Hénon maps with four different coupling levels, in both unidirectional and bidirectional configurations, and demonstrated that opMCI effectively discriminates between all coupling levels. We then applied the opMCI metric to heart rate variability data collected from 37 parent-infant dyads during shared reading and playing activities, within the framework of the Shared Emotional Reading (SHER) project, which aims to assess the effects of early intervention in preterm babies. Two groups comprised preterm infants: an intervention group, who participated in a two-month shared reading program, and a control group, who practiced shared play activities. A full-term group provided additional control data. The opMCI values were significantly higher for the intervention dyads than for the other groups during the shared reading task, showing that an early reading intervention program could increase parent-infant synchrony in preterm babies.
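The coupled Hénon map benchmark used for validation can be generated in a few lines; the form below is the standard unidirectionally coupled system (a = 1.4, b = 0.3) commonly used to test synchrony indices, and is an assumption about the paper's exact setup:

```python
import numpy as np

def coupled_henon(n, c, a=1.4, b=0.3, seed=0):
    """Unidirectionally coupled Henon maps: x drives y with coupling strength c.

    Returns both series after discarding a 100-step transient.
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(n + 100)
    y = np.zeros(n + 100)
    x[0], x[1] = rng.uniform(0.0, 0.5, 2)
    y[0], y[1] = rng.uniform(0.0, 0.5, 2)
    for k in range(1, n + 99):
        x[k + 1] = a - x[k] ** 2 + b * x[k - 1]
        # The response map's quadratic term mixes drive and response.
        y[k + 1] = a - (c * x[k] + (1 - c) * y[k]) * y[k] + b * y[k - 1]
    return x[100:], y[100:]

x, y = coupled_henon(2000, c=0.6)   # moderately coupled pair
```

Sweeping c from 0 to 1 yields the graded coupling levels against which a coupling index such as opMCI can be checked.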
Shukla, A.; Rao, A.; Siddharth, S.; Bao, R.
Chest radiography (CXR) is a primary modality for assessing cardiopulmonary conditions, but its effectiveness is limited by anatomical obstructions (e.g., ribs, clavicles) that hinder accurate pneumothorax segmentation, boundary delineation, and severity estimation. While deep learning-based bone suppression improves soft-tissue visibility, its utility for precise pixel-wise localization remains underexplored. This study investigates the downstream application of bone suppression for pneumothorax segmentation, integrating it as a preprocessing step to mitigate bony obscuration. We evaluate its impact across CNN and Vision Transformer models on two public datasets, where models trained on bone-suppressed CXRs significantly outperform (p < 0.05) non-suppressed counterparts, achieving up to 17% improvement in Mean Average Surface Distance (MASD), 4.9% in Dice Similarity Coefficient (DSC), and 5.9% in Normalized Surface Dice (NSD), alongside a 9.5% gain in Matthews Correlation Coefficient (MCC). These results demonstrate bone suppression as an architecture-independent enhancement for pneumothorax localization, improving the reliability of automated CXR interpretation.
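Two of the overlap metrics reported above, Dice and Matthews correlation, reduce to simple confusion-matrix arithmetic on binary masks; a minimal numpy sketch with toy masks (the masks are illustrative, not from the datasets used):

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient for binary masks."""
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

def mcc(pred, gt):
    """Matthews correlation coefficient for binary masks."""
    p, g = pred.astype(bool).ravel(), gt.astype(bool).ravel()
    tp = np.sum(p & g)
    tn = np.sum(~p & ~g)
    fp = np.sum(p & ~g)
    fn = np.sum(~p & g)
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Toy masks: ground truth square vs a shifted prediction.
gt = np.zeros((8, 8), dtype=bool); gt[2:6, 2:6] = True
pred = np.zeros_like(gt); pred[3:7, 3:7] = True
```

Surface metrics such as MASD and NSD additionally weight boundary distances, which is why they are more sensitive to the delineation gains the bone-suppressed models show.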
Collins, S. H.; De Groote, F.; Gregg, R. D.; Huang, H.; Lenzi, T.; Sartori, M.; Sawicki, G. S.; Si, J.; Slade, P.; Young, A. J.
In "Experiment-free exoskeleton assistance via learning in simulation", Luo et al. [1] present an ambitious framework for developing exoskeleton controllers through reinforcement learning exclusively in computer simulation. The authors report that a control policy trained on a small dataset from one subject was directly transferred to physical hardware, reducing human metabolic cost during walking, running, and stair climbing by more than any prior device. If confirmed, this would represent a major breakthrough for the field of wearable robotics and their clinical applications. However, a close examination of the published materials casts doubt on these claims. The reported experimental results violate physiological limits on the relationship between mechanical power and muscle energy use during gait [2-4]. The algorithmic claims are surprising and cannot be verified; in contrast with established replicability standards in machine learning [5, 6], executable code has not been made available. We conclude that the goals of this study have not yet been verifiably achieved and make recommendations for avoiding publication errors of this type in the future.
Balakrishna, K.; Hammond, A.; Cheruku, S.; Das, A.; Saggu, M.; Thakur, N. A.; Urrea, R.; Zhu, H.
Coronary Artery Disease (CAD) is a leading cause of cardiovascular-related mortality, affecting 20.5 million people in the United States and approximately 315 million people worldwide as of 2022. The asymptomatic and progressive nature of CAD presents challenges for early diagnosis and timely intervention. Traditional diagnostic methods such as angiography and stress tests are resource-intensive and prone to human error, creating a need for automated and time-effective detection methods. This paper introduces a novel approach to the diagnosis of CAD based on a Convolutional Neural Network (CNN) with a temporal attention mechanism. The architecture automatically extracts and emphasizes critical features from sequential coronary angiogram imaging data, allowing subtle signs of CAD that conventional methods would miss to be spotted. The temporal attention mechanism strengthens the model's ability to focus on relevant temporal patterns, improving sensitivity and robustness in detecting CAD across the various stages of the disease. Experimental validation on a large and diverse dataset demonstrates the efficacy of the proposed method, with significant improvements in both detection accuracy and processing time compared to traditional CNN architectures. The proposed system is scalable and can be integrated into clinical workflows to assist healthcare professionals. Ultimately, this research contributes to the field of AI-driven healthcare solutions and has the potential to reduce the global burden of CAD through early automated detection.
Korenic, A.; Özkaya, U.; Capar, A.
Background and Objective: Variational Autoencoders (VAEs) offer a powerful framework for unsupervised anomaly detection and data clustering, often surpassing traditional methods. A core strength of VAEs lies in their ability to model data distributions probabilistically, enabling robust identification of anomalies and clusters through reconstruction likelihood -- a stochastic metric providing a principled alternative to deterministic error scores. Methods: We investigated how different VAE architectures, combining reconstruction likelihood with a learnable or data-driven prior, performed in a clustering task on a toy dataset, MNIST. Results were verified using dimensionality reduction techniques, t-distributed Stochastic Neighbor Embedding (t-SNE) and Uniform Manifold Approximation and Projection (UMAP), alongside clustering algorithms, k-means and Hierarchical Density-Based Spatial Clustering of Applications with Noise (HDBSCAN). Results: The VAE's encoder inherently maps data points into a latent space exhibiting discernible cluster structure, as evidenced by alignment with ground truth labels. While the dimensionality reduction techniques (t-SNE and UMAP) facilitated the application of the clustering algorithms (k-means and HDBSCAN), these methods were primarily used to visualize and interpret the latent space organization. Conclusions: This study demonstrates that VAEs effectively cluster data by implicitly encoding assignments in their latent representations. Determining cluster membership from the encoder output, combined with reconstruction likelihood over semantic features, offers a principled approach for identifying typical samples and anomalies. Future research should focus on leveraging this inherent clustering capability of VAEs to enhance interpretability and facilitate clinical application.
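For a Gaussian decoder, the reconstruction-likelihood score mentioned above reduces to a per-sample log-density under N(x̂, σ²I); a minimal numpy sketch, where the identity-plus-noise "decoder output" and σ are purely illustrative stand-ins:

```python
import numpy as np

def gaussian_recon_loglik(x, x_hat, sigma=0.1):
    """Log-likelihood of x under N(x_hat, sigma^2 I), summed over dimensions.

    Low values flag anomalies; high values flag typical samples.
    """
    d = x.size
    sq = np.sum((x - x_hat) ** 2)
    return -0.5 * (d * np.log(2 * np.pi * sigma ** 2) + sq / sigma ** 2)

x_hat = np.full(10, 0.5)               # pretend decoder reconstruction
x_typical = np.full(10, 0.5)           # perfectly reconstructed sample
x_anomaly = np.full(10, 0.9)           # poorly reconstructed sample
score_typ = gaussian_recon_loglik(x_typical, x_hat)
score_ano = gaussian_recon_loglik(x_anomaly, x_hat)
```

Unlike a raw reconstruction error, this score is a proper log-density, so it can be averaged over latent samples to approximate the marginal likelihood the paper treats as its stochastic metric.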
Ruth, P. S.; DeBenedetti, T.; O'Brien, L.; Landay, J. A.; Coleman, T.; Fox, E. B.
Vascular waveforms, which measure bulk flow in blood vessels, are widely used to measure vital signs, diagnose conditions, and predict long-term health outcomes. Analyzing vascular waveforms depends on three fundamentally interdependent tasks: signal filtering, pulse timing detection, and pulse shape extraction. We hypothesized that Bayesian pulse deconvolution can achieve improved performance on all three tasks by solving them jointly. This method uses an analytical, generative model of vascular waveforms with priors informed by physical and biological domain knowledge. In simulations, Bayesian pulse deconvolution achieves better performance on all tasks compared with existing algorithms: 90% reduction of median filtering error, 60% reduction in pulse timing error, and 85% reduction in shape extraction error. The advantages in simulations extend to human recordings of photoplethysmography waveforms. Taking real time-synchronized electrocardiogram R-R intervals as a proxy ground truth, Bayesian pulse deconvolution achieves 40% lower pulse interval estimation error (RMSE = 5.1 ms) compared with typical algorithms (RMSE = 8.3 ms, p=1e-10). By extracting more accurate and informative insights from vascular waveforms, Bayesian pulse deconvolution could advance a wide array of health technologies that rely on interpreting signals from blood vessels.
Lin, D.; Mussavi Rizi, M.; O'Neill, C.; Lotz, J. C.; Anderson, P.; Torres Espin, A.
Causal discovery algorithms are often leveraged for inferring causal relationships and recovering a causal model from data. However, causal discovery from data alone is limited by the structural constraints of the used dataset, the lack of causal logic, and the lack of external knowledge. Thus, data-driven causal discovery can only suggest possible causal relationships at best. To overcome these limitations, Large Language Models (LLMs) and knowledge systems, such as Retrieval-Augmented Generation (RAG), have been proposed as alternatives to data-driven causal discovery and as a method to augment causal discovery algorithms. Using an expert-defined causal graph of chronic lower back pain, we further propose knowledge-graph-based RAG systems, such as GraphRAG, as an improvement over RAG systems for augmenting causal discovery (F1 0.745), benchmarking its performance against augmenting causal discovery with an LLM (F1 0.636), augmenting causal discovery with RAG (F1 0.714), and causal discovery alone (F1 0.396). We also explore the impact of different prompting methods for causality, such as querying for the plausibility of causal relationships, the presence of statistical associations, and the existence of temporal causal relationships, as inspired by the methodology of the domain experts constructing our ground truth. Lastly, we discuss how applications of LLMs, RAG, and graph-based RAG systems can impact and accelerate the causal modeling of chronic lower back pain by bridging the gap between domain knowledge and data-driven approaches to causal modeling.
Ajadi, N. A.; Afolabi, S. O.; Adenekan, I. O.; Jimoh, A. O.; Ajayi, A. O.; Adeniran, T. A.; Adepoju, G. D.; Hassan, N. F.; Ajadi, S. A.
This research presents multimodal deep learning for structural heart disease prediction. We evaluated multiple deep learning architectures, including a TCN, a simple CNN, ResNet1d18, a light transformer, and a hybrid model. The models were examined across three seeds to ensure robustness, and bootstrap confidence intervals were used to measure performance differences. The TCN consistently outperformed the competing architectures, achieving statistically significant improvements with stable performance across runs, along with more efficient computation and more stable training. Our results also underscore the importance of fairness evaluation when developing deep learning models for healthcare applications.
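The bootstrap confidence interval used to compare architectures can be sketched with a percentile bootstrap over per-seed scores; the per-seed AUC values below are hypothetical placeholders, not the paper's numbers:

```python
import numpy as np

def bootstrap_ci_diff(scores_a, scores_b, n_boot=10000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for mean(scores_a) - mean(scores_b)."""
    rng = np.random.default_rng(seed)
    a, b = np.asarray(scores_a), np.asarray(scores_b)
    idx_a = rng.integers(0, len(a), (n_boot, len(a)))   # resample with replacement
    idx_b = rng.integers(0, len(b), (n_boot, len(b)))
    diffs = a[idx_a].mean(axis=1) - b[idx_b].mean(axis=1)
    lo, hi = np.quantile(diffs, [alpha / 2, 1 - alpha / 2])
    return lo, hi

# Hypothetical per-seed scores for a TCN vs a competing model.
tcn = [0.91, 0.92, 0.90]
cnn = [0.85, 0.86, 0.84]
lo, hi = bootstrap_ci_diff(tcn, cnn)
```

A difference is declared statistically significant in this scheme when the interval [lo, hi] excludes zero.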
Ramirez-Torano, F.; Hatlestad-Hall, C.; Drews, A.; Renvall, H.; Rossini, P. M.; Marra, C.; Haraldsen, I. H.; Maestu, F.; Bruna, R.
Electroencephalography (EEG) preprocessing is a critical yet time-consuming step that often relies on expert-driven, semi-automatic pipelines, limiting scalability and reproducibility across large datasets. In this work, we present sEEGnal, a fully automated and modular pipeline for EEG preprocessing designed to produce outputs comparable to expert-driven analyses while ensuring consistency and computational efficiency. The pipeline integrates three main modules: data standardization following the EEG extension of the Brain Imaging Data Structure (BIDS), bad channel detection, and artifact identification, combining physiologically grounded criteria with independent component analysis and ICLabel-based classification. Performance was evaluated against manual preprocessing performed by EEG experts at two complementary levels: preprocessing metadata (bad channels, artifact duration, and rejected components) and EEG-derived measures. In addition, test-retest analyses were conducted to assess the stability of the pipeline across repeated recordings. Results show that sEEGnal achieves performance comparable to expert-driven preprocessing while preserving key neurophysiological features. Furthermore, the pipeline demonstrates reduced variability and increased consistency compared to human experts. These findings support sEEGnal as a robust and scalable solution for automated EEG preprocessing in both research and large-scale applications. Highlights: fully automated and modular EEG preprocessing pipeline; benchmarked against expert-driven preprocessing; comparable performance in metadata and EEG-derived measures; stable performance in test-retest recordings; BIDS-based framework for reproducible EEG data handling.
Liu, J.; Fan, J.; Deng, Z.; Tang, X.; Zhang, H.; Sharma, A.; Li, Q.; Liang, C.; Wang, A. Y.; Liu, L.; Luo, K.; Liu, H.; Qiu, H.
Background: Patient-ventilator synchrony, an essential prerequisite for non-invasive mechanical ventilation, requires an accurate matching of every phase of respiration between the patient and the ventilator. Methods: We developed a long short-term memory (LSTM)-based model that predicts the patient's inspiratory and expiratory times. The model consists of two hidden layers, each with eight LSTM units, and was trained on a dataset of approximately 27,000 500-ms-long flow signals capturing both inspiratory and expiratory events. Results: The LSTM model achieved 97% accuracy and F1 score on the test data, and the average trigger error was less than 2.20%. In the first trial, 10 volunteers were enrolled. In "Compliance" mode, 78.6% of the triggering by the LSTM model was compatible with neuronal respiration, higher than the Auto-Trak model (74.2%). The Auto-Trak model performed marginally better at pressure support levels of 5 and 10 cmH2O. Given the success of the first clinical trial, we further tested the models on five patients with acute respiratory distress syndrome (ARDS). The LSTM model exhibited 60.6% of its triggering in the 33%-box, better than the 49.0% of the Auto-Trak model, and its PVI index was significantly lower (36.5% vs 52.9%). Conclusions: Overall, the LSTM model performed comparably to, or even better than, the Auto-Trak model in both latency and PVI index. While other mathematical models have been developed, ours was effectively embedded on a chip to control ventilator triggering. Trial registration: Approval Number: 2023ZDSYLL348-P01; Approval Date: 28/09/2023. Clinical Trial Registration Number: ChiCTR2500097446; Registration Date: 19/02/2025.
Jiang, Q.; Ke, Y.; Sinisterra, L. G.; Elangovan, K.; Li, Z.; Yeo, K. K.; Jonathan, Y.; Ting, D. S. W.
Coronary artery disease is a leading cause of morbidity and mortality. Invasive coronary angiography is currently the gold standard in disease diagnosis. Several studies have attempted to use artificial intelligence (AI) to automate its interpretation with varying levels of success. However, most existing studies cannot generate detailed angiographic reports beyond simple classification or segmentation. This study aims to fine-tune and evaluate the performance of a Vision-Language Model (VLM) in coronary angiogram interpretation and report generation. Using twenty thousand angiogram keyframes from 1987 patients collated across four unique datasets, we fine-tuned the InternVL2-4B model with Low-Rank Adaptor weights to perform stenosis detection, anatomy labelling, and report generation. The fine-tuned VLM achieved a precision of 0.56, recall of 0.64, and F1-score of 0.60 for stenosis detection. In anatomy segmentation, it attained a weighted precision of 0.50, recall of 0.43, and F1-score of 0.46, with higher scores in major vessel segments. Report generation integrating multiple angiographic projection views yielded an accuracy of 0.42, a negative predictive value of 0.58, and a specificity of 0.52. This study demonstrates the potential of using a VLM to streamline angiogram interpretation and rapidly provide actionable information to guide management, support care in resource-limited settings, and audit the appropriateness of coronary interventions. Author summary: Coronary artery disease carries a heavy disease burden worldwide, and coronary angiography is the gold-standard imaging for its diagnosis. Interpreting these complex images and producing clinical reports require significant expertise and time. In this study, we fine-tuned and investigated an open-source VLM, InternVL2-4B, to interpret and report coronary angiogram images in key tasks including stenosis detection, anatomy identification, and full report generation.
We also referenced the fine-tuned InternVL2-4B against a state-of-the-art segmentation model, YOLOv8x, evaluated on the same test sets. We examined how machine learning metrics like the intersection-over-union score may not fully capture the clinical accuracy of model predictions and discussed the limitations of relying solely on these metrics for evaluating clinical AI systems. Although the model has not yet achieved expert-level interpretation, our results demonstrate the potential and feasibility of automating the reporting of coronary angiograms. Such systems could assist cardiologists by improving reporting efficiency, highlighting lesions that may require review, and enabling automated calculation of clinical scores such as the SYNTAX score.
Dong, Y.; Fang, G.; Du, R.; Hu, H.; Fang, Z.; Guo, C.; Lu, R.; Jia, Y.; Tian, Y.; Wang, Z.
Introduction: This work proposes an improved U-Net-based segmentation model for colorectal polyps, aiming to address the challenges of variable lesion morphology, ambiguous boundaries, complex background interference, and insufficient cross-level feature fusion in endoscopic images [5,12]. Methods: An improved network termed MCA-UNet was developed based on U-Net [5]. The model incorporates a multi-scale context convolution block (MCCB) to enhance multi-scale feature extraction and an attention-guided feature fusion module (AGFF) to optimize skip-feature selection and fusion in the decoder. Experiments were conducted on publicly available colorectal polyp image datasets, including Kvasir-SEG and CVC-ClinicDB [13-15]. Four models, U-Net, U-Net+MCCB, U-Net+AGFF, and MCA-UNet, were compared, and all were trained for 100 epochs. Dice, intersection over union (IoU), and mean absolute error (MAE) were used as the main evaluation metrics [20]. Results: On the mixed validation set, the Dice scores of U-Net, U-Net+MCCB, U-Net+AGFF, and MCA-UNet were 0.742, 0.771, 0.754, and 0.783, respectively; the corresponding IoU values were 0.603, 0.635, 0.618, and 0.649; and the MAE values were 0.102, 0.090, 0.097, and 0.086. Compared with the baseline U-Net, MCA-UNet improved Dice and IoU by 5.53% and 7.63%, respectively, while reducing MAE by 15.69%. Comparisons on the Kvasir-SEG and CVC-ClinicDB validation subsets further demonstrated the more stable performance of the proposed model. Conclusion: By jointly integrating multi-scale contextual modeling and attention-guided feature fusion, MCA-UNet effectively improves the accuracy and robustness of colorectal polyp segmentation and may provide useful support for intelligent endoscopic image analysis [12,17,18].
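The relative improvements quoted above follow directly from the reported absolute scores; a quick arithmetic check:

```python
# Validation scores reported for baseline U-Net vs MCA-UNet.
dice_unet, dice_mca = 0.742, 0.783
iou_unet, iou_mca = 0.603, 0.649
mae_unet, mae_mca = 0.102, 0.086

dice_gain = (dice_mca - dice_unet) / dice_unet   # relative Dice gain
iou_gain = (iou_mca - iou_unet) / iou_unet       # relative IoU gain
mae_drop = (mae_unet - mae_mca) / mae_unet       # relative MAE reduction
```

These evaluate to 5.53%, 7.63%, and 15.69%, matching the percentages stated in the abstract.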
Protserov, S.; Repalo, A.; Mashouri, P.; Hunter, J.; Masino, C.; Madani, A.; Brudno, M.
Machine learning models have seen considerable success in the medical image segmentation domain. However, one challenge they face is confounders, or shortcuts: spurious correlations or biases in the training data that affect the resulting models. One example of such confounders in surgical machine learning is the setup of surgical equipment, including tools and lighting. Using the identification of safe and dangerous zones of dissection in laparoscopic cholecystectomy images and videos as a use case, we inspect two equipment-induced biases: the presence of surgical tools in the field of view and the position of lighting. We propose methods for evaluating the severity of these biases and augmentation-based methods for mitigating them. We show that our tool-bias mitigations improve the models' consistency under tool movements by 9 percentage points in the most inconsistent cases, and by 4 percentage points on average. Our lighting-bias mitigations reduce the fraction of true dangerous-zone pixels that may be predicted as safe under light changes from 5% to 1.5%, without compromising segmentation quality.
YILDIZ, O.; Subasi, A.
Stress detection with wearable physiological sensors is vital in digital health and affective computing. Conventional machine learning techniques usually examine physiological signals separately, missing the intricate inter-signal connections involved in the human stress response. While deep neural networks offer high accuracy, they function as black boxes, offering minimal understanding of the physiological processes behind stress detection. This study introduces PAMG-AT (Physiological Attention Multi-Graph with Adaptive Topology), a hierarchical graph neural network architecture for stress detection from multimodal physiological signals, establishing a methodology for affective computing that emphasizes interpretability and extensibility while maintaining strong predictive performance. In this framework, physiological features serve as nodes within a knowledge-driven graph, while edges represent established physiological relationships, including cardiac-electrodermal coupling and cardio-respiratory interaction. The architecture employs a three-level attention mechanism: spatial encoding via Graph Attention Networks (GAT) to assess feature importance, temporal modeling with a Transformer to capture dynamics across time windows, and global pooling for classification. The model is evaluated using three sensor configurations (chest-only, wrist-only, and hybrid) on the WESAD dataset under rigorous Leave-One-Subject-Out (LOSO) cross-validation. PAMG-AT achieves competitive performance: 94.59% accuracy (±6.8%) for chest sensors, 91.76% (±9.2%) for wrist sensors, and 92.80% (±8.33%) for the hybrid configuration. The method provides interpretability via attention weights, revealing that ECG-EDA relationships (cardiac-electrodermal coupling) are most predictive of stress. Three low-responder subjects (S2, S3, S9) with atypical physiological stress patterns show lower accuracy (81-87%), offering clinically valuable insights for personalized stress management. The effective wrist-only configuration, achieving 91.76% accuracy, supports practical deployment in consumer wearables.
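The spatial GAT step that produces the interpretable attention weights can be sketched as a single-head attention computation in numpy; the shapes, dense adjacency, and LeakyReLU slope below are illustrative, not the paper's configuration:

```python
import numpy as np

def gat_attention(H, W, a, adj):
    """Single-head graph-attention coefficients.

    H: (n, f) node features, W: (f, f') projection, a: (2f',) edge scorer,
    adj: (n, n) boolean adjacency with self-loops.
    Returns the row-stochastic attention matrix over neighbors.
    """
    Z = H @ W
    n = Z.shape[0]
    e = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            # e[i, j] = a^T [z_i || z_j] before the nonlinearity
            e[i, j] = np.concatenate([Z[i], Z[j]]) @ a
    e = np.where(e > 0, e, 0.2 * e)        # LeakyReLU
    e = np.where(adj, e, -np.inf)          # mask non-edges
    e -= e.max(axis=1, keepdims=True)      # numerically stable softmax
    att = np.exp(e)
    return att / att.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
H = rng.standard_normal((4, 3))            # 4 physiological feature nodes
W = rng.standard_normal((3, 2))
a = rng.standard_normal(4)
adj = np.ones((4, 4), dtype=bool)
att = gat_attention(H, W, a, adj)
```

Inspecting the rows of `att` is the mechanism by which edge importance, such as the ECG-EDA coupling highlighted above, becomes readable from a trained model.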
Zoofaghari, M.; Rahaimifard, A.; Chatterjee, S.; Balasingham, I.
Goal-oriented semantic communication has recently emerged in wireless sensor-actuator networks, emphasizing the meaning and relevance of information over raw data delivery, thereby enabling resource-efficient telecommunication. This paradigm offers significant benefits for intra-body or implantable sensor-actuator networks, including dramatic reductions in bandwidth requirements, latency, and power consumption. In this paper, we address a patch-based energy-efficient anomaly detection method for smart capsule endoscopy. We propose a deep learning-based algorithm that employs the similarity between features extracted from measured images and a reference (normal) image as the detection metric. The algorithm is evaluated using a clinical dataset of capsule-captured images, combined with a simulated intra-body channel model. The results demonstrate that even with only 60% of the transmission power (relative to a standard link design for QPSK modulation) and 65% of the light intensity, the probability of anomaly detection remains above 85%, and it gradually improves as power and illumination levels increase. This improvement translates into a potential battery life extension of over 43%. The findings highlight the potential of semantic-aware, energy-efficient intra-body devices for more sustainable and effective medical interventions.
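The similarity-to-reference detection metric described above can be sketched as a cosine-distance score between feature vectors; the choice of cosine similarity and the toy vectors are illustrative assumptions, since the abstract does not specify the exact similarity measure:

```python
import numpy as np

def anomaly_score(feat, ref_feat):
    """Anomaly score as 1 - cosine similarity to a reference (normal) feature."""
    cos = feat @ ref_feat / (np.linalg.norm(feat) * np.linalg.norm(ref_feat))
    return 1.0 - cos

# Hypothetical feature vectors for a reference patch and two measured patches.
ref = np.array([1.0, 0.0, 0.0])
normal = np.array([0.9, 0.1, 0.0])      # close to the reference
abnormal = np.array([0.1, 0.9, 0.4])    # far from the reference
```

Because only a low-dimensional feature comparison, rather than a full image, drives the decision, the link can tolerate the reduced transmit power and illumination the paper reports.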
Yee, N. J.; Soenjaya, Y.; Kates Rose, N.; Atinga, A.; Demore, C.; Halai, M.; Whyne, C.; Hardisty, M.
Falls among older adults can result in hip fractures that require x-ray-based assessment in the emergency department (ED). Only 25.7% of patients presenting to EDs are diagnosed with a hip fracture, so improved diagnosis prior to transportation to hospital could result in fewer hospital visits and improved triaging. Patients with hip fractures could be directed immediately to centres with orthopaedic surgeons, reducing time-to-surgery, particularly in rural communities. Ultrasound (US) imaging is portable and can identify fractures but requires expertise, particularly in image interpretation. Deep learning may reduce operator dependence by automating image interpretation. This study aims to develop HipSAFE, a US-based hip fracture detection tool to support triaging by nurses and paramedics. We hypothesize that its diagnostic accuracy will be comparable to pelvic x-ray performance in a preclinical study. Bilateral hind limbs of 15 porcine cadavers were imaged by US-naive operators before and after an iatrogenic hip fracture. The limbs were divided into training, validation, and test (8 femurs) sets. The training data were augmented with geometric and photometric transformations. The models included MobileNetV3 (S/L), EfficientNet-Lite (0-2), and ResNet (18/50). Using moving-average aggregation over the operator cine clips, EfficientNet-Lite0 achieved the highest performance (F1 = 0.944 [95% CI: 0.880-0.987]; sensitivity = 89.5% [78.6-97.5%]; specificity = 100.0% [100.0-100.0%]). A majority-voting ensemble model ranked second (F1 = 0.932 [0.857-0.984]). Naive operators and radiologists had lower performance (F1 = 0.667 [0.596-0.758] and 0.685 [0.597-0.729], respectively). This preclinical study demonstrated that HipSAFE has excellent diagnostic accuracy, and there may be a role for US in improving hip trauma triaging, especially in rural and resource-constrained environments.
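The moving-average aggregation over cine clips described above can be sketched as smoothing per-frame probabilities before thresholding; the window length, threshold, and frame scores below are illustrative assumptions, not the study's settings:

```python
import numpy as np

def aggregate_clip(frame_probs, window=5, threshold=0.5):
    """Clip-level fracture call from per-frame probabilities.

    Smooths frame scores with a centered moving average and flags the
    clip if any smoothed score crosses the threshold.
    """
    kernel = np.ones(window) / window
    smoothed = np.convolve(frame_probs, kernel, mode="same")
    return bool(smoothed.max() >= threshold), smoothed

# A single noisy spike is damped; a sustained run of high scores survives.
spike = np.array([0.1, 0.1, 0.9, 0.1, 0.1, 0.1, 0.1, 0.1])
run = np.array([0.1, 0.1, 0.8, 0.9, 0.85, 0.9, 0.1, 0.1])
flag_spike, _ = aggregate_clip(spike)
flag_run, _ = aggregate_clip(run)
```

This is what makes clip-level calls robust to isolated misclassified frames from US-naive operators.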
Mahmoudi, A.; Firouzi, V.; Rinderknecht, S.; Seyfarth, A.; Sharbafi, M. A.
Optimizing assistive wearable devices is crucial for their efficacy and user adoption, yet state-of-the-art methods like Human-in-the-Loop Optimization (HILO) and biomechanical modeling face limitations. HILO is time-consuming and often restricted to optimizing control parameters, while inverse dynamics assumes invariant kinematics, which is unreliable for adaptive human-device interaction. Predictive simulation offers a powerful alternative, enabling computational exploration of design spaces. However, existing approaches often lack systematic optimization frameworks and rigorous validation against experimental data. To address this, we developed a Design Optimization Platform that integrates predictive simulations within a two-level optimization structure for personalizing assistive device design. This paper primarily validates the platform's predictive simulations against a publicly available dataset of the passive Biarticular Thigh Exosuit (BATEX), assessing their reliability. Our findings show that the model can adequately predict the kinematics and major muscle activations, except for pelvis tilt and some biarticular muscles. The key finding is that successful identification of personalized optimal BATEX stiffness parameters requires acceptable prediction of metabolic cost trends, not their precise values. Our analysis further reveals that the model's accuracy in predicting Vasti muscle activation in the baseline condition is a significant indicator of its success in predicting metabolic cost trends. This demonstrates that accurate prediction of performance trends matters more for effective simulation-based design optimization than perfect biomechanical accuracy, advancing targeted and efficient assistive device development.
Chuma, A. T.; Youssef, A. S.; Asmare, M. H.; Wang, C.; Kassie, D. M.; Voigt, J.-U.; Vanrumste, B.
Reliable interpretation of electrocardiograms (ECGs) requires precise identification of P, QRS, and T (PQRST) wave boundaries. However, this remains challenging due to noise, signal quality variability, and inherent morphological diversity, particularly in recordings from children. This study systematically compares the performance of leading deep neural network (DNN) and heuristic-based delineation algorithms on ambulatory single-lead ECG signals, focusing on temporal accuracy. Experiments were conducted using the publicly available LUDB dataset and a private validation dataset comprising 21,759 annotated single-lead wave segments from 611 children recorded with the KardiaMobile ECG sensor. The DNNs were first trained on the LUDB dataset and subsequently tested on the validation dataset. Delineation performance was assessed using sensitivity (Se) and positive predictive value (P+). The best-performing heuristic-based and DNN models reached Se and P+ of (98.9% vs 97.9%) for P, (99.8% vs 99.2%) for QRS, and (98.7% vs 95.9%) for T-wave fiducials, respectively. The lowest standard deviation (in ms) of wave onset/offset delineation was achieved by an attention-based 1D U-Net model: ±16.6/±16.3 for the P-wave, ±14.0/±16.3 for the QRS, and ±26.3/±18.8 for the T-wave. The findings indicate that optimized heuristic models can perform comparably to complex DNNs, highlighting their efficiency and suitability for real-time ECG delineation in digital health monitoring applications.
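The Se and P+ metrics for fiducial detection reduce to tolerance-windowed matching between detected and annotated time points; a minimal sketch, where the 150 ms tolerance and the toy annotations are illustrative assumptions:

```python
def se_ppv(detected_ms, reference_ms, tol_ms=150):
    """Sensitivity and positive predictive value for fiducial detection.

    A detection is a true positive if it lies within tol_ms of a not yet
    matched reference annotation (greedy one-to-one matching).
    """
    ref = list(reference_ms)
    tp = 0
    for d in detected_ms:
        hits = [r for r in ref if abs(d - r) <= tol_ms]
        if hits:
            ref.remove(min(hits, key=lambda r: abs(d - r)))
            tp += 1
    fn = len(ref)
    fp = len(detected_ms) - tp
    se = tp / (tp + fn) if tp + fn else 1.0
    ppv = tp / (tp + fp) if tp + fp else 1.0
    return se, ppv

ref = [200, 1000, 1800, 2600]     # hypothetical annotated fiducials (ms)
det = [210, 1020, 2590, 3300]     # detector output: one miss, one false alarm
se, ppv = se_ppv(det, ref)
```

The onset/offset standard deviations quoted above are then computed over the signed timing errors of the matched (true positive) pairs.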
Chen, Z.; Hu, T.; Haddadin, S.; Franklin, D.
There is more to musculotendon path modeling than aligning a cable to reflect the geometric features of a muscle-tendon unit. From the perspective of simulation accuracy, the key is to replicate the length- and moment arm-joint angle relations of the target muscle. In this study, we propose an effect-oriented approach to automated path modeling via hybrid calibration based on muscle surface meshes and moment arms. The task is formulated as an optimization problem with a threefold objective for the path to: 1) pass through multiple ellipses representing muscle cross-sections, 2) yield moment arms that match experimental measurements, and 3) yield moment arms with the designated signs. The performance of our optimization framework is demonstrated with the musculoskeletal surface mesh from the Visible Human Male and moment arm datasets from the literature, producing 42 paths that are anatomically realistic and biomechanically accurate in 20.1 min. Our optimization framework uses specified gradients, which is faster and more accurate than the default numerical gradient, making it applicable for large-scale subject-specific uses.